OCR at the Internet Archive with Tesseract and hOCR¶
Authors: Merlijn Wajer <email@example.com>
This document outlines the OCR (Optical Character Recognition) module and its features as used to perform optical text recognition on Internet Archive items, and elaborates on the design decisions and how various solutions were picked.
The Internet Archive had been using proprietary OCR technology for many years, but decided to move to an entirely open source stack after evaluating the various open source software OCR offerings, settling on Tesseract but keeping an eye out for alternative engines.
This transition to Tesseract was completed near the end of 2020.
There are a few open standards when it comes to defining OCR results, with the main contenders being hOCR, ALTO XML, and PAGE XML.
The Internet Archive settled on using hOCR. At the time of writing, Tesseract supports outputting ALTO XML, but does not yet support PAGE XML. hOCR was deemed sufficiently simple and flexible, with the added advantage that it is XHTML, which allows for viewing the documents in a browser. Various hOCR tools and libraries exist, as do hOCR viewers, such as hocrviewer-mirador and hocrjs.
We intend to keep around the older (pre-tesseract) OCR results, but will attempt to convert them to hOCR as well, providing a hOCR file for each item with OCR results, no matter the OCR engine. The code to convert those files can also be found in archive-hocr-tools.
After an Internet Archive Item has been uploaded, various processes kick in to analyze the content and provide derivative files, one of those being the OCR file. The output OCR format was changed from the old proprietary format to hOCR, as explained earlier.
Barring any failures in the OCR process, after upload, every item will get one or more *_hocr.html files, which represent the results of OCR jobs. Each *_hocr.html file contains results for all pages in one set of images (book, PDF, or otherwise), with text, bounding boxes, and confidence at the word level.
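Word-level results like these can be read with nothing more than the standard library. A minimal sketch, assuming a namespace-free fragment (real *_hocr.html files are XHTML and may carry namespaces); the ocrx_word class and the bbox/x_wconf title properties come from the hOCR specification:

```python
# Sketch: extracting word-level text, bounding boxes, and confidence from an
# hOCR fragment using only the standard library. The class names (ocr_page,
# ocrx_word) and title properties (bbox, x_wconf) are defined by the hOCR spec.
import re
import xml.etree.ElementTree as ET

SAMPLE = """<div class="ocr_page" title="bbox 0 0 2480 3508">
  <span class="ocrx_word" title="bbox 100 200 180 240; x_wconf 96">Hello</span>
  <span class="ocrx_word" title="bbox 190 200 290 240; x_wconf 91">world</span>
</div>"""

def parse_words(hocr_fragment):
    """Yield (text, bbox, confidence) for every ocrx_word element."""
    root = ET.fromstring(hocr_fragment)
    for span in root.iter("span"):
        if "ocrx_word" not in span.get("class", ""):
            continue
        title = span.get("title", "")
        bbox = [int(v) for v in re.search(r"bbox (\d+) (\d+) (\d+) (\d+)", title).groups()]
        conf = float(re.search(r"x_wconf (\d+(?:\.\d+)?)", title).group(1))
        yield span.text, bbox, conf

words = list(parse_words(SAMPLE))
```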
For those seeking more detailed OCR results, each *_hocr.html file should also have a corresponding *_chocr.html.gz file, with character-level granularity. (The exact meaning of “character” differs, of course, per script or language.)
From these hOCR files, two additional OCR files get created:

- *_hocr_pageindex.json.gz: a simple JSON array annotating where each individual page element starts in the *_hocr.html file, enabling quick fast-forwarding to an individual page without parsing all the XML.
- *_hocr_searchtext.txt.gz: a plaintext file that is ingested by the full text search engine.
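As a sketch of why the page index is useful, the following assumes the index is a plain JSON array of byte offsets into the decompressed *_hocr.html file, one per page element, as described above; the real layout may differ:

```python
# Sketch: using a *_hocr_pageindex.json.gz file to jump straight to one page
# of a (decompressed) *_hocr.html file without parsing any XML. Assumes the
# index is a JSON array of byte offsets, one per page; an assumption based on
# the description above, not the authoritative format.
import gzip
import json

def read_page(hocr_path, pageindex_path, page_number):
    with gzip.open(pageindex_path, "rt") as fh:
        offsets = json.load(fh)  # byte offset where each page element starts
    start = offsets[page_number]
    # The next offset (if any) bounds the current page's bytes.
    end = offsets[page_number + 1] if page_number + 1 < len(offsets) else None
    with open(hocr_path, "rb") as fh:
        fh.seek(start)
        return fh.read(end - start if end is not None else -1)
```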
Additional generated content¶
In addition to the *_hocr.html file, even more files are generated, for accessibility and compatibility reasons:

- *_djvu.xml: a modified version of the DjVu XML standard; these files can also be used to read OCR results, but the recommendation is to instead parse the hOCR files.
- *_djvu.txt: a human-readable plaintext version of the generated *_djvu.xml file.
Archive.org items have metadata, and the metadata can dictate how the items are
treated. For example, the
language field determines what languages will be
used when OCRing the content of the item. Upon completion, the OCR process will
write various metadata values that potentially enable document discovery through
metadata search. This section covers all the metadata relevant to the OCR process.
Metadata and input for the OCR process¶
The language metadata key describes the language(s) the documents contained in the item are written in. Accepted values are standard three-letter ISO-639 codes, MARC language codes, and canonical names of a language. So in the case of English, either eng or English would be accepted.
Additionally, Tesseract language codes are accepted, and a list of special-case
language mappings can be found in section Supported languages.
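Such a mapping can be pictured as a simple lookup table. The entries below are illustrative assumptions; the authoritative table lives in the language.py file linked in the Supported languages section:

```python
# Sketch: normalising a `language` metadata value to a Tesseract language
# code, accepting ISO-639 codes, MARC codes, and canonical language names.
# The entries below are illustrative, not the module's actual table.
LANGUAGE_MAP = {
    "eng": "eng", "english": "eng",
    "fre": "fra", "fra": "fra", "french": "fra",
    "ger": "deu", "deu": "deu", "german": "deu",
}

def to_tesseract_language(value):
    """Return a Tesseract code, or None for unsupported/invalid values."""
    return LANGUAGE_MAP.get(value.strip().lower())
```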
The language metadata value can be repeated, meaning that multiple languages can be provided. If this is the case, the OCR module will perform OCR using all of the provided languages.
If the language value is set to the literal string None, then no OCR will be performed, and every page will instead be treated as a page with no text.
If the language metadata key is not provided, or is set to mul, then the OCR system will perform what is known as the autonomous mode, which is explained in detail later on.
If language is set to an invalid or unknown language, the OCR module will also perform the autonomous mode instead, attempting to guess the script and language. (In addition, it will also set either ocr_invalid_language or ocr_unsupported_language in the item (and resulting hOCR file) metadata to the languages that are considered invalid or unsupported.)
The ocr_default_parameters metadata key allows specifying specific OCR module parameters. This only has effect when it is set on the collection of an item; setting it on an item itself has no effect. See task arguments for an explanation of all the possible task arguments.
Scandata is not a metadata key, but rather an XML file containing specific per-image information, including whether the image should be included in any of the produced formats. The module will find, parse, and honour these files if they exist.
Scandata files are marked with the format
Metadata written by the OCR module¶
The following keys are written to the item metadata, as well as to the files metadata of the generated hOCR files.
If an item contains multiple stacks of images, PDFs, or otherwise, then the item-level metadata only represents the values of the stack of images that was OCRd, in which case the hOCR file-level metadata should be inspected for correct values. This metadata is only written to the files metadata starting with module version 0.0.11.
This metadata key contains the name and version of the OCR engine that was used to produce the OCR content. If a language metadata key was found to be not OCRable, the ocr metadata key also contains the text “language not currently OCRable”. Example:

ocr: "tesseract 4.1.1"
This metadata key describes the parameters passed to the OCR engine (Tesseract) that were ultimately used to OCR the item contents. This can be used to spot potential problems.
ocr_parameters: "-l eng"
This metadata key describes the version of the OCR module that was used to produce the resulting hOCR file. This can be used to perform OCR on items again if problems are found in a specific version.
The script or set of scripts that is/are most prominent in the images. This value is typically based on sampling the content and internally relies on Tesseract’s script detection module. Please refer to the Tesseract documentation for the list of currently supported scripts.
This metadata key describes the confidence in the various ocr_detected_script values; if multiple values are present, then the ordering matches the ocr_detected_script ordering. The confidence value is expressed as a floating point number between 0 and 1.
The language that is most prominent after OCR. The functionality is provided by langid.py and is expressed as ISO639-1 language codes, but might be changed to ISO639-3 codes in the future.
This metadata key describes the confidence in the detected language (ocr_detected_lang). The confidence value is expressed as a floating point number between 0 and 1.
Contains the literal value
true if the OCR was a result of an autonomous
mode OCR run. Otherwise, the key is not present.
If a value in the language field is not supported, this field will be set to the unsupported value(s).
If a value in the language field is considered invalid, this field will be set to the invalid value(s).
This value gets set to true if the hOCR document was created from an Abbyy XML file.
If OCRing a specific page fails, this value will get set to the error that caused the page failure. Currently can get set to:
Task arguments typically cannot be supplied manually, but can be set as part of the ocr_default_parameters value of a collection.
Perform script detection by sampling, default is on (
1). Stores the result
in the ocr_detected_script metadata field.
Perform full script detection, default is off (
0). Stores the result in the
ocr_detected_script metadata field.
Use the detected script in the OCR step, default is off (0).
Detect the language based upon the OCR’d corpus and store it in the ocr_detected_lang metadata field. Default is on (1).
Set the maximum running time (in seconds) for any given page; the default is 1800 seconds. Applies to both script detection and the actual OCR process. If the timeout is set to 0, no timeout is used.
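Taken together, the task arguments behave like a set of defaults that a collection’s ocr_default_parameters can override. A sketch, with hypothetical argument names (the real names are the task arguments described above, whose exact spellings are not reproduced here):

```python
# Sketch: merging collection-level ocr_default_parameters over the module's
# built-in defaults. The argument names below are hypothetical placeholders,
# not the module's actual task-argument names.
DEFAULTS = {
    "script_detection_sampling": 1,  # sampled script detection: on
    "full_script_detection": 0,      # full script detection: off
    "use_detected_script": 0,        # feed detected script to OCR: off
    "language_detection": 1,         # corpus language detection: on
    "page_timeout": 1800,            # per-page timeout in seconds (0 = none)
}

def effective_arguments(collection_defaults):
    """Built-in defaults, overridden by the collection's settings (if any)."""
    args = dict(DEFAULTS)
    args.update(collection_defaults or {})
    return args
```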
The OCR module writes various metadata keys to items (see Metadata written by the OCR module), which are searchable fields on Archive.org. For example, to find all documents where the detected script was Fraktur, one could search for the following:

ocr_detected_script:Fraktur
Likewise, to find all items which were processed with the Autonomous mode, one could search for the following:
To surface all items with a detected language of French, but with the language metadata key set to English, one could try something like this:
ocr_detected_lang:fr AND (language:english OR language:eng)
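Such queries can also be issued programmatically. A sketch that only builds the request URL, based on the public advancedsearch.php endpoint; treat the endpoint and parameter names as assumptions to verify:

```python
# Sketch: building an archive.org advanced-search URL for a metadata query,
# using only the standard library. The endpoint and parameter names are
# assumptions based on the public advancedsearch.php API.
from urllib.parse import urlencode

def search_url(query, rows=50):
    params = {"q": query, "output": "json", "rows": rows}
    return "https://archive.org/advancedsearch.php?" + urlencode(params)

url = search_url("ocr_detected_lang:fr AND (language:english OR language:eng)")
```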
Summary of the OCR module modes and functionality¶
This section expands a little on the heuristics and computations performed by the OCR module. In-depth analysis of the code is outside of the scope of this document.
The normal mode of operation involves mapping the values in the language metadata into Tesseract language names. If this succeeds, the images are extracted and analysed by the script-detection module (if enabled). The confidence for each script on each page is summed up; scripts with low confidence are filtered out.
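The confidence-summing step can be pictured as follows; the threshold and data shapes are illustrative assumptions, not the module’s actual values:

```python
# Sketch: summing per-page script-detection confidences across sampled pages
# and dropping scripts whose total falls below a threshold. The threshold
# value and data shapes are illustrative assumptions.
from collections import defaultdict

def select_scripts(per_page_scores, threshold=1.0):
    """per_page_scores: list of {script: confidence} dicts, one per sampled page."""
    totals = defaultdict(float)
    for page in per_page_scores:
        for script, conf in page.items():
            totals[script] += conf
    return sorted(s for s, total in totals.items() if total >= threshold)

scripts = select_scripts([
    {"Latin": 0.9, "Fraktur": 0.2},
    {"Latin": 0.8, "Cyrillic": 0.1},
])
```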
After that step, each image is OCR’d with all the provided languages, producing a hOCR file for each image. These files are then concatenated into a single hOCR file containing all the pages.
Finally, the extracted text corpus is analysed by the language detection module (not on a page-by-page basis).
The autonomous mode is a multi-pass OCR mode where no knowledge of the script or language of the content is assumed or known. This is computationally more intensive. In most simple cases, this is a very effective way to analyse content that is provided without the right metadata. In some cases, the result of the module ranges from sub-optimal to unusable, depending on the script and language of the content; unsupported scripts in particular will likely not turn out well.
The first step in this process is analysing every image with the script detection module from Tesseract. At the end of this step, one or a few scripts are selected for the first OCR pass (Tesseract can perform OCR with just a script as data files).
Every page is then OCR’d with the detected scripts. Once that has finished, the language detection module is run on each page in an attempt to figure out the various languages the content is written in. Using some simple heuristics, a final set of languages is then selected for the second OCR pass.
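One possible shape for such a “simple heuristic” is a share-of-pages vote; the 10% cut-off below is an illustrative assumption, not the module’s actual rule:

```python
# Sketch: turning per-page language guesses into a final OCR language set -
# keep any language detected on at least a minimum share of pages. The 10%
# cut-off is an illustrative assumption.
from collections import Counter

def select_languages(per_page_langs, min_share=0.1):
    counts = Counter(per_page_langs)
    n = len(per_page_langs)
    return sorted(lang for lang, c in counts.items() if c / n >= min_share)

langs = select_languages(["eng"] * 18 + ["fra", "deu"])
```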
The second OCR pass performs OCR as in the Normal operation, using the detected languages as input languages.
Conversion from Abbyy XML¶
If an Abbyy XML file is present, the module can instead create a hOCR file from the *_abbyy.gz file. Whether this happens or not is decided externally (by the sourceFormat provided to the module).
In the future we hope to provide an extensive list here. For now, however, one could take a peek at the code: https://git.archive.org/www/tesseract/-/blob/master/language.py
Tesseract module 0.0.13¶
- Switch to archive-hocr-tools 1.1.4
- Add initial support for converting from Abbyy
Tesseract module 0.0.12¶
- Switch to Tesseract 5 alpha
- Handle items without
- Automatically use Fraktur script if detected with a confidence greater than
- Switch to archive-hocr-tools 1.1.3
Tesseract module 0.0.11¶
- Metadata is now also written to per-file metadata (_files.xml)
- Move to python-derivermodule 1.0.0
Tesseract module 0.0.10¶
- Various language mapping additions
- Clearer error messages when scandata doesn’t match.
- Bugfix for backslashes being rewritten to forward slashes in Leptonica, which was reported and fixed promptly: https://github.com/DanBloomberg/leptonica/issues/558
Tesseract module 0.0.9¶
- Additional language mappings, supporting more exotic language codes, and some different spellings of language codes. (Based on an updated list from Tesseract, some languages from the old module, and some others)
- Module will now process items with invalid or unsupported language codes, where possible. The autonomous mode will be turned on in these cases, and the metadata will reflect the invalid or unsupported languages in ocr_invalid_language and ocr_unsupported_language. If the script cannot be detected, the module will enter the “cannot ocr path”
- The “cannot ocr path” will not perform (further) OCR on the item. The ocr metadata will contain “language not currently OCRable” (the same as the old module), and the hOCR file will contain empty pages and a hint in a <meta> field that OCR has not been run.
- Items that have “handwritten” in the language field, or the literal string None, will not be OCR’d, via the “cannot ocr path”.
- Bugfix: hocr-combine-stream did not honour the ocr-system and ocr-capabilities <meta> keywords. This has now been fixed, but is still unfortunate.
Tesseract module 0.0.8¶
- Introduces the Autonomous mode.
- Support more languages: kur and tgl are not actually in Tesseract; replace them with kmr and fil. (kmr is not an exact replacement, but better than nothing for “Kurdish”: kmr is Latin script, while kur used to be Arabic but is not currently available. tgl is Tagalog, which was renamed to fil, Filipino.)
- Fixup invalid metadata in items (not caused by us, but we can fix it, discussed with Hank)
- Add Fraktur for all languages that we know have used Fraktur in the past (taken from Wikipedia)
- ocr_detected_script and ocr_detected_script_conf can now have multiple values (only in autonomous mode at the moment)
Tesseract module 0.0.7¶
- Support for collection default parameters (see ocr_default_parameters)
- Image validation checks are loosened up as they were too strict.
- A division by zero has been fixed when the confidence in the script detected was 0.
- Ships with improved Fraktur model
Tesseract module 0.0.6¶
- Script detection confidence is now added, with normalisation based on all the collected confidence values (metadata field: ocr_script_detect_conf). This field will be useful in the upcoming autonomous mode, where the module will be able to figure out the script and potentially even the language.
- Task arguments support for scripting flexibility.
- Switched to hocr-tools package: https://git.archive.org/merlijn/archive-hocr-tools
- Code refactoring for the upcoming autonomous mode
Tesseract module 0.0.5¶
- Script detection by sampling, not full analysis
Tesseract module 0.0.4¶
- Streaming XML version of hOCR combination